

Search results: All records where Creators/Authors contains: "Lynch, Nancy"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Animals flexibly select actions that maximize future rewards despite facing uncertainty in sensory inputs, action-outcome associations, or contexts. The computational and circuit mechanisms underlying this ability are poorly understood. A clue to such computations can be found in the neural systems involved in representing sensory features, sensorimotor-outcome associations, and contexts. Specifically, the basal ganglia (BG) have been implicated in forming sensorimotor-outcome associations [1], while the thalamocortical loop between the prefrontal cortex (PFC) and mediodorsal thalamus (MD) has been shown to engage in contextual representations [2, 3]. Interestingly, both human and non-human animal experiments indicate that the MD represents different forms of uncertainty [3, 4]. However, finding evidence for uncertainty representation gives little insight into how it is utilized to drive behavior. Normative theories have excelled at providing such computational insights. For example, deploying traditional machine learning algorithms to fit human decision-making behavior has clarified how associative uncertainty alters exploratory behavior [5, 6]. However, despite their computational insight and ability to fit behaviors, normative models cannot be directly related to neural mechanisms. Therefore, a critical gap exists between what we know about the neural representation of uncertainty on one end and the computational functions uncertainty serves in cognition on the other. This gap can be filled with mechanistic neural models that can approximate normative models as well as generate experimentally observed neural representations. In this work, we build a mechanistic cortico-thalamo-BG loop network model that directly fills this gap. The model includes computationally relevant mechanistic details of both BG and thalamocortical circuits, such as distributional activities of dopamine [7] and thalamocortical projections modulating cortical effective connectivity [3] and plasticity [8] via interneurons. We show that our network can explore various environments more efficiently and flexibly than commonly used machine learning algorithms, and we show that the mechanistic features we include are crucial for handling different types of uncertainty in decision-making. Furthermore, through derivation and mathematical proof, we show that our model approximates two novel normative theories. We show mathematically that the first has near-optimal performance on bandit tasks. The second is a generalization of the well-known CUSUM algorithm, which is known to be optimal on single change-point detection tasks [9]. Our normative model expands on this by detecting multiple sequential contextual changes. To our knowledge, our work is the first to link computational insights, normative models, and neural realization together in decision-making under various forms of uncertainty. (Illustrative sketches of posterior-sampling exploration and of the classical CUSUM detector appear after this list.)
    Free, publicly-accessible full text available February 18, 2025
  2. We continue our study, begun in [5], of how concepts that have hierarchical structure might be represented in brain-like neural networks, how these representations might be used to recognize the concepts, and how these representations might be learned. In [5], we considered simple tree-structured concepts and feed-forward layered networks. Here we extend the model in two ways: we allow limited overlap between children of different concepts, and we allow networks to include feedback edges. For these more general cases, we describe and analyze algorithms for recognition and algorithms for learning. (A minimal recognition sketch for tree-structured concepts appears after this list.)
    Free, publicly-accessible full text available June 6, 2024
  3. Decision making in natural settings requires efficient exploration to handle uncertainty. Since associations between actions and outcomes are uncertain, animals need to balance exploration and exploitation to select the actions that lead to maximal rewards. The computational principles by which animal brains explore during decision-making are poorly understood. Our challenge here was to build a biologically plausible neural network that efficiently explores an environment, and to understand its effectiveness mathematically. One of the most evolutionarily conserved and important systems in decision making is the basal ganglia (BG) [1]. In particular, dopamine activity (DA) in the BG is thought to represent reward prediction error (RPE) to facilitate reinforcement learning [2]. Therefore, our starting point is a cortico-BG loop motif [3]. This network adjusts exploration based on neuronal noise and updates its value estimate through RPE. To account for the fact that animals adjust exploration based on experience, we modified the network in two ways. First, it was recently discovered that DA does not simply represent a scalar RPE value; rather, it represents RPE as a distribution [4]. We incorporated the distributional RPE framework and extended this hypothesis, allowing an RPE distribution to update the posterior over action values encoded by cortico-BG connections. Second, it is known that firing in layer 2/3 of the cortex is variable and sparse [5]. Our network thus included a random sparsification of cortical activity as a mechanism for sampling from this posterior for experience-based exploration. Combining these two features, our network is able to take the uncertainty of its value estimates into account to accomplish efficient exploration in a variety of environments. (A posterior-sampling sketch appears after this list.)
    Free, publicly-accessible full text available June 23, 2024
  4. Task allocation is an important problem for robot swarms to solve, allowing agents to reduce task completion time by performing tasks in a distributed fashion. Existing task allocation algorithms often assume prior knowledge of task location and demand, or fail to consider the effects of the geometric distribution of tasks on the completion time and communication cost of the algorithms. In this paper, we examine an environment where agents must explore and discover tasks with positive demand and successfully assign themselves to complete all such tasks. We first provide a new discrete general model for modeling swarms. Operating within this theoretical framework, we propose two new task allocation algorithms for initially unknown environments: one based on N-site selection and the other on virtual pheromones. We analyze each algorithm separately and also evaluate the effectiveness of the two algorithms in dense vs. sparse task distributions. Compared to the Lévy walk, which has been theorized to be optimal for foraging, our virtual-pheromone-inspired algorithm is much faster in sparse to medium task densities but is communication- and agent-intensive. Our site-selection-inspired algorithm also outperforms the Lévy walk in sparse task densities and is a less resource-intensive option than our virtual pheromone algorithm for this case. Because the performance of both algorithms relative to random walk is dependent on task density, our results shed light on how task density is important in choosing a task allocation algorithm in initially unknown environments. (A Lévy-walk baseline sketch appears after this list.)
    Free, publicly-accessible full text available May 29, 2024
  5. Due to the increasing complexity of robot swarm algorithms, analyzing their performance theoretically is often very difficult. Instead, simulators are often used to benchmark the performance of robot swarm algorithms. However, we are not aware of simulators that take advantage of the naturally highly parallel nature of distributed robot swarms. This paper presents ParSwarm, a parallel C++ framework for simulating robot swarms at scale on multicore machines. We demonstrate the power of ParSwarm by implementing two applications, task allocation and density estimation, and running simulations on large numbers of agents. (A sketch of round-parallel agent updates appears after this list.)
    Free, publicly-accessible full text available June 19, 2024
  6. We continue our study, begun in Lynch and Mallmann-Trenn (Neural Networks, 2021), of how concepts that have hierarchical structure might be represented in brain-like neural networks, how these representations might be used to recognize the concepts, and how these representations might be learned. In Lynch and Mallmann-Trenn (Neural Networks, 2021), we considered simple tree-structured concepts and feed-forward layered networks. Here we extend the model in two ways: we allow limited overlap between children of different concepts, and we allow networks to include feedback edges. For these more general cases, we describe and analyze algorithms for recognition and algorithms for learning.
  7. Task allocation is an important problem for robot swarms to solve, allowing agents to reduce task completion time by performing tasks in a distributed fashion. Existing task allocation algorithms often assume prior knowledge of task location and demand, or fail to consider the effects of the geometric distribution of tasks on the completion time and communication cost of the algorithms. In this paper, we examine an environment where agents must explore and discover tasks with positive demand and successfully assign themselves to complete all such tasks. We first provide a new discrete general model for modeling swarms. Operating within this theoretical framework, we propose two new task allocation algorithms for initially unknown environments: one based on N-site selection and the other on virtual pheromones. We analyze each algorithm separately and also evaluate the effectiveness of the two algorithms in dense vs. sparse task distributions. Compared to the Lévy walk, which has been theorized to be optimal for foraging, our virtual-pheromone-inspired algorithm is much faster in sparse to medium task densities but is communication- and agent-intensive. Our site-selection-inspired algorithm also outperforms the Lévy walk in sparse task densities and is a less resource-intensive option than our virtual pheromone algorithm for this case. Because the performance of both algorithms relative to random walk is dependent on task density, our results shed light on how task density is important in choosing a task allocation algorithm in initially unknown environments.
  8. Decision making in natural settings requires efficient exploration to handle uncertainty. Since associations between actions and outcomes are uncertain, animals need to balance exploration and exploitation to select the actions that lead to maximal rewards. The computational principles by which animal brains explore during decision-making are poorly understood. Our challenge here was to build a biologically plausible neural network that efficiently explores an environment, and to understand its effectiveness mathematically. One of the most evolutionarily conserved and important systems in decision making is the basal ganglia (BG) [1]. In particular, dopamine activity (DA) in the BG is thought to represent reward prediction error (RPE) to facilitate reinforcement learning [2]. Therefore, our starting point is a cortico-BG loop motif [3]. This network adjusts exploration based on neuronal noise and updates its value estimate through RPE. To account for the fact that animals adjust exploration based on experience, we modified the network in two ways. First, it was recently discovered that DA does not simply represent a scalar RPE value; rather, it represents RPE as a distribution [4]. We incorporated the distributional RPE framework and extended this hypothesis, allowing an RPE distribution to update the posterior over action values encoded by cortico-BG connections. Second, it is known that firing in layer 2/3 of the cortex is variable and sparse [5]. Our network thus included a random sparsification of cortical activity as a mechanism for sampling from this posterior for experience-based exploration. Combining these two features, our network is able to take the uncertainty of its value estimates into account to accomplish efficient exploration in a variety of environments.
  9. The house hunting behavior of the Temnothorax albipennis ant allows the colony to explore several nest choices and agree on the best one. Their behavior serves as the basis for many bio-inspired swarm models that solve the same problem. However, many of the existing site selection models in both the insect colony and swarm literature test the model’s accuracy and decision time only on setups where all potential site choices are equidistant from the swarm’s starting location. These models do not account for the geographic challenges that result from site choices with different geometry. For example, although actual ant colonies are capable of consistently choosing a higher-quality, farther site instead of a lower-quality, closer site, existing models are much less accurate in this scenario. Existing models are also more prone to committing to a low-quality site if it is on the path between the agents’ starting site and a higher-quality site. We present a new model for the site selection problem and verify via simulation that it is able to better handle these geographic challenges. Our results provide insight into the types of challenges site selection models face when distance is taken into account. Our work will allow swarms to be robust to more realistic situations where sites could be distributed in the environment in many different ways. (A sketch of such non-equidistant test layouts appears after this list.)
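Items 1, 3, and 8 describe exploration driven by sampling from a posterior over action values, with the posterior updated by (distributional) reward prediction errors. The following is a minimal, hedged Python illustration of that idea in a Thompson-sampling style, under strong simplifying assumptions: Gaussian rewards, a scalar conjugate update standing in for the distributional RPE machinery, and plain random sampling standing in for sparse cortical activity. It is not the authors' cortico-BG network, and every name and parameter here is hypothetical.

```python
# Minimal Thompson-sampling-style sketch of posterior-based exploration on a
# Gaussian bandit. Hypothetical illustration only: the Gaussian conjugate
# update stands in for the distributional-RPE machinery, and plain random
# sampling stands in for sparse cortical activity.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4
true_means = rng.normal(0.0, 1.0, n_actions)   # unknown action values (toy task)
obs_var = 1.0                                  # assumed reward-noise variance

# Gaussian posterior over each action's value, starting broad.
post_mean = np.zeros(n_actions)
post_var = np.full(n_actions, 10.0)

for t in range(500):
    # Sample one value per action from its posterior and act greedily on the
    # samples: uncertain actions are explored more often, automatically.
    sampled = rng.normal(post_mean, np.sqrt(post_var))
    action = int(np.argmax(sampled))

    reward = rng.normal(true_means[action], np.sqrt(obs_var))

    # Conjugate Gaussian update driven by the prediction error (reward minus
    # the current posterior mean), shrinking that action's uncertainty.
    gain = post_var[action] / (post_var[action] + obs_var)
    post_mean[action] += gain * (reward - post_mean[action])
    post_var[action] *= 1.0 - gain

print("true action values:", np.round(true_means, 2))
print("posterior means:   ", np.round(post_mean, 2))
```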
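Item 1 cites the CUSUM algorithm as optimal for detecting a single change point and states that the authors' normative model generalizes it to multiple sequential contextual changes. For reference, here is a sketch of the classical one-sided CUSUM detector on synthetic data; the multi-change generalization from the abstract is not reproduced, and the drift and threshold values are arbitrary choices for illustration.

```python
# Classical one-sided CUSUM detector on synthetic data. This is the
# single-change-point algorithm cited in the abstract, not the authors'
# multi-change generalization; drift and threshold are arbitrary choices.
import numpy as np

def cusum_first_alarm(x, pre_mean=0.0, drift=0.5, threshold=8.0):
    """Return the index of the first alarm for an upward mean shift, or None."""
    s = 0.0
    for t, xt in enumerate(x):
        # Accumulate evidence of an upward shift; the drift term keeps the
        # statistic near zero while the mean stays at pre_mean.
        s = max(0.0, s + (xt - pre_mean) - drift)
        if s > threshold:
            return t
    return None

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200),   # pre-change regime
                    rng.normal(1.5, 1.0, 200)])  # post-change regime
print(cusum_first_alarm(x))   # expected to alarm shortly after the change at t = 200
```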
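Items 2 and 6 study how tree-structured concepts can be represented and recognized in layered, brain-like networks. The toy sketch below captures only the basic flavor of threshold-based recognition on a concept tree: a leaf "fires" when its feature is presented, and an internal concept fires when enough of its children fire. The tree, the threshold rule, and the example features are assumptions for illustration, not the specific networks (with overlap and feedback edges) analyzed in the papers.

```python
# Toy threshold-based recognition on a tree-structured concept hierarchy.
# The tree, features, and firing rule are illustrative assumptions, not the
# networks (with overlap and feedback) analyzed in the papers.

concept_tree = {
    "animal": ["dog", "cat"],      # internal concepts list their children
    "dog":    ["fur", "barks"],
    "cat":    ["fur", "meows"],    # "fur" is shared: limited overlap of children
}

def fires(concept, presented, tau=1.0):
    """A leaf fires if its feature is presented; an internal concept fires
    when at least a fraction tau of its children fire."""
    children = concept_tree.get(concept)
    if children is None:                      # leaf feature
        return concept in presented
    active = sum(fires(child, presented, tau) for child in children)
    return active >= tau * len(children)

print(fires("dog",    {"fur", "barks"}))            # True: all children present
print(fires("cat",    {"fur", "barks"}))            # False: "meows" missing
print(fires("animal", {"fur", "barks"}, tau=0.5))   # True: one child (dog) fires
```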
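Items 4 and 7 benchmark their task-allocation algorithms against the Lévy walk, a random-walk strategy with heavy-tailed step lengths that has been theorized to be optimal for foraging. The sketch below generates such a walk in the plane using a truncated power-law step distribution; the exponent and step cap are assumptions, and the N-site-selection and virtual-pheromone algorithms themselves are not reproduced here.

```python
# Lévy walk in the plane with truncated power-law step lengths, the foraging
# baseline the task-allocation papers compare against. The exponent mu and
# the step cap are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)

def levy_walk(n_steps, mu=2.0, max_step=50.0):
    """Return an (n_steps + 1, 2) array of positions. Step lengths follow a
    truncated power law p(l) ~ l**(-mu) with l >= 1; headings are uniform."""
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        # Inverse-CDF sample from a Pareto(mu - 1) distribution, then truncate.
        length = min(max_step, (1.0 - rng.random()) ** (-1.0 / (mu - 1.0)))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        pos[i + 1] = pos[i] + length * np.array([np.cos(theta), np.sin(theta)])
    return pos

path = levy_walk(1000)
print("net displacement after 1000 steps:", round(float(np.linalg.norm(path[-1])), 1))
```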
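Item 5 describes ParSwarm, a parallel C++ framework that exploits the fact that agent updates within a simulation round are largely independent and can run on separate cores. The Python sketch below conveys only that general idea, farming per-agent updates out to worker processes with a barrier between rounds; it is a generic illustration, not ParSwarm's API, and all names and sizes are hypothetical.

```python
# Generic sketch of round-parallel swarm simulation: agent updates within a
# synchronous round are independent, so they can be distributed over worker
# processes, with a barrier between rounds. Not ParSwarm's C++ API.
from concurrent.futures import ProcessPoolExecutor
import random

def step_agent(agent):
    """One simulation round for a single agent (here, a lazy random walk)."""
    x, y = agent
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)])
    return (x + dx, y + dy)

def run(n_agents=2000, n_rounds=20, workers=4):
    agents = [(0, 0)] * n_agents
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(n_rounds):
            # All agents advance before the next round starts (implicit barrier).
            agents = list(pool.map(step_agent, agents, chunksize=128))
    return agents

if __name__ == "__main__":
    final = run()
    print("simulated", len(final), "agents")
```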
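Item 9 argues that site-selection models should be tested on geometries where candidate sites are not equidistant from the swarm's start, including cases where a low-quality site lies on the path to a better one. The sketch below sets up such layouts and scores a selection model by whether it picks the highest-quality site; a quality-blind nearest-site baseline (which fails on all three layouts) is included only to show the scoring interface. The coordinates, qualities, and baseline are illustrative assumptions, not the paper's model or experiments.

```python
# Non-equidistant site-selection test layouts in the spirit of the last
# abstract: the better site may be farther away, or a poor site may lie on the
# path to a better one. Coordinates, qualities, and the quality-blind
# nearest-site baseline are illustrative assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    x: float
    y: float
    quality: float   # in [0, 1]

START = (0.0, 0.0)

LAYOUTS = {
    # Classic benchmark: both candidates the same distance from the start.
    "equidistant":    [Site("A", 10, 0, 0.5), Site("B", -10, 0, 0.9)],
    # The higher-quality site is farther away.
    "far_but_better": [Site("A", 5, 0, 0.5), Site("B", 20, 0, 0.9)],
    # A low-quality site sits on the straight-line path to the best site.
    "on_the_path":    [Site("A", 10, 0, 0.4), Site("B", 25, 0, 0.9)],
}

def picks_best(choose, layout):
    """True if the model's choice is the highest-quality site in the layout.
    `choose` is any selection model: a function of (sites, start) -> Site."""
    sites = LAYOUTS[layout]
    return choose(sites, START) is max(sites, key=lambda s: s.quality)

# Baseline that ignores quality entirely: always commit to the nearest site.
nearest = lambda sites, start: min(
    sites, key=lambda s: (s.x - start[0]) ** 2 + (s.y - start[1]) ** 2)

print({name: picks_best(nearest, name) for name in LAYOUTS})
```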